
    Few-Shot Visual Grounding for Natural Human-Robot Interaction

    Natural Human-Robot Interaction (HRI) is one of the key components for service robots to work in human-centric environments. In such dynamic environments, the robot needs to understand the intention of the user to accomplish a task successfully. Towards addressing this point, we propose a software architecture that segments a target object, indicated verbally by a human user, from a crowded scene. At the core of our system, we employ a multi-modal deep neural network for visual grounding. Unlike most grounding methods that tackle the challenge using pre-trained object detectors via a two-step process, we develop a single-stage zero-shot model that is able to provide predictions on unseen data. We evaluate the performance of the proposed model on real RGB-D data collected from public scene datasets. Experimental results show that the proposed model performs well in terms of accuracy and speed, while showcasing robustness to variation in the natural language input. Comment: 6 pages, 4 figures, accepted at ICARSC2021.
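    The abstract above describes fusing visual and language features in a single pass instead of re-ranking detector proposals. As a rough illustration of that single-stage idea (not the authors' implementation; the tiny convolutional backbone, GRU text encoder, and all dimensions below are assumptions), a fused grounding-heatmap head might look like this in PyTorch:

```python
# Minimal sketch (not the paper's model) of single-stage visual grounding:
# image features and a sentence embedding are fused and decoded into a
# per-pixel heatmap over the referred object. Sizes are illustrative.
import torch
import torch.nn as nn

class SingleStageGrounder(nn.Module):
    def __init__(self, vocab_size=1000, text_dim=256, vis_dim=256):
        super().__init__()
        # lightweight visual encoder (stand-in for a real backbone)
        self.visual = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, vis_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        # language encoder: embedding + GRU, last hidden state as sentence vector
        self.embed = nn.Embedding(vocab_size, text_dim)
        self.gru = nn.GRU(text_dim, text_dim, batch_first=True)
        # fusion + decoder producing a single-channel grounding heatmap
        self.fuse = nn.Conv2d(vis_dim + text_dim, vis_dim, 1)
        self.head = nn.Conv2d(vis_dim, 1, 1)

    def forward(self, image, tokens):
        v = self.visual(image)                      # (B, C, H, W)
        _, h = self.gru(self.embed(tokens))         # h: (1, B, text_dim)
        t = h[-1][:, :, None, None].expand(-1, -1, v.shape[2], v.shape[3])
        fused = torch.relu(self.fuse(torch.cat([v, t], dim=1)))
        return self.head(fused)                     # (B, 1, H, W) logits

if __name__ == "__main__":
    model = SingleStageGrounder()
    img = torch.randn(1, 3, 128, 128)
    query = torch.randint(0, 1000, (1, 8))          # token ids, e.g. "the red mug ..."
    print(model(img, query).shape)                  # torch.Size([1, 1, 32, 32])
```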

    A Hybrid Compositional Reasoning Approach for Interactive Robot Manipulation

    In this paper we present a neuro-symbolic (hybrid) compositional reasoning model for coupling language-guided visual reasoning with robot manipulation. A non-expert human user can prompt the robot agent using natural language, providing a referring expression, a question or a grasp action instruction. The model tackles all cases in a task-agnostic fashion through the utilization of a shared library of primitive skills. Each primitive handles an independent sub-task, such as reasoning about visual attributes, spatial relation comprehension, logic and enumeration, as well as arm control. A language parser maps the input query to an executable program composed of such primitives depending on the context. While some primitives are purely symbolic operations (e.g. counting), others are trainable neural functions (e.g. image/word grounding), therefore marrying the interpretability and systematic generalization benefits of discrete symbolic approaches with the scalability and representational power of deep networks. We generate a synthetic dataset of tabletop scenes to train our approach and perform several evaluation experiments for visual reasoning. Results show that the proposed method achieves very high accuracy while being transferable to real-world scenes with few-shot visual fine-tuning. Finally, we integrate our method with a robot framework and demonstrate how it can serve as an interpretable solution for an interactive object picking task, both in simulation and with a real robot. Supplementary material is available at this https URL.
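    The abstract describes a parser that turns a query into a program over a shared library of primitives, mixing symbolic and neural steps. The deliberately simplified sketch below illustrates only that execution scheme; the scene format, primitive names, and hard-coded "parsed" programs are assumptions, and the neural primitives are replaced with symbolic stand-ins:

```python
# Illustrative sketch of the compositional idea: a parsed query becomes a small
# program over a library of primitives, where symbolic steps (filter, count)
# and stand-ins for neural steps (attribute grounding) share one execution loop.

# A "scene" here is a list of detected objects with symbolic attributes.
scene = [
    {"name": "cube", "color": "red", "position": (0.2, 0.1)},
    {"name": "cube", "color": "blue", "position": (0.5, 0.3)},
    {"name": "ball", "color": "red", "position": (0.7, 0.2)},
]

# Primitive library: each primitive maps an intermediate result to a new one.
PRIMITIVES = {
    "scene":        lambda _, scene=scene: list(scene),
    "filter_color": lambda objs, color: [o for o in objs if o["color"] == color],
    "filter_name":  lambda objs, name: [o for o in objs if o["name"] == name],
    "count":        lambda objs: len(objs),            # purely symbolic step
    "locate":       lambda objs: [o["position"] for o in objs],
}

def execute(program):
    """Run a parsed program, e.g. [("scene",), ("filter_color", "red"), ("count",)]."""
    result = None
    for step in program:
        op, *args = step
        result = PRIMITIVES[op](result, *args)
    return result

# "How many red objects are there?" -> parser output (assumed here, not learned)
print(execute([("scene",), ("filter_color", "red"), ("count",)]))   # 2
# "Where is the blue cube?" -> a locate/grasp style pipeline
print(execute([("scene",), ("filter_name", "cube"),
               ("filter_color", "blue"), ("locate",)]))             # [(0.5, 0.3)]
```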

    A Strong Transfer Baseline for RGB-D Fusion in Vision Transformers

    The Vision Transformer (ViT) architecture has recently established its place in the computer vision literature, with multiple architectures for recognition of image data or other visual modalities. However, training ViTs for RGB-D object recognition remains an understudied topic, viewed in recent literature only through the lens of multi-task pretraining in multiple modalities. Such approaches are often computationally intensive and have not yet been applied to challenging object-level classification tasks. In this work, we propose a simple yet strong recipe for transferring pretrained ViTs to RGB-D domains for single-view 3D object recognition, focusing on fusing RGB and depth representations encoded jointly by the ViT. Compared to previous works on multimodal Transformers, the key challenge here is to exploit the attested flexibility of ViTs to capture cross-modal interactions at the downstream rather than the pretraining stage. We explore which depth representation is better in terms of resulting accuracy and compare two methods for injecting RGB-D fusion within the ViT architecture (i.e., early vs. late fusion). Our results on the Washington RGB-D Objects dataset demonstrate that in such RGB → RGB-D scenarios, late fusion techniques work better than the more popularly employed early fusion. With our transfer baseline, adapted ViTs score up to 95.1% top-1 accuracy on Washington, setting a new state of the art on this benchmark. We additionally evaluate our approach with an open-ended lifelong learning protocol, where we show that our adapted RGB-D encoder leads to features that outperform unimodal encoders, even without explicit fine-tuning. We further integrate our method with a robot framework and demonstrate how it can serve as a perception utility in an interactive robot learning scenario, both in simulation and with a real robot.
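    To make the early-versus-late fusion distinction concrete, the following sketch contrasts the two around a small Transformer encoder. It is not the paper's ViT recipe: the patch embedding, encoder size, and pooling are simplified assumptions, and a pretrained backbone would replace the tiny encoder in practice.

```python
# Minimal sketch contrasting early vs. late RGB-D fusion around a shared
# Transformer encoder, in the spirit of the transfer baseline described above.
import torch
import torch.nn as nn

def patchify(x, patch=16):
    """Split an image tensor (B, C, H, W) into flattened patch tokens."""
    B, C, H, W = x.shape
    x = x.unfold(2, patch, patch).unfold(3, patch, patch)      # (B, C, h, w, p, p)
    return x.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * patch * patch)

class TinyEncoder(nn.Module):
    def __init__(self, in_dim, dim=192, depth=2, heads=3):
        super().__init__()
        self.proj = nn.Linear(in_dim, dim)
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)

    def forward(self, tokens):
        return self.encoder(self.proj(tokens)).mean(dim=1)      # pooled embedding

rgb = torch.randn(2, 3, 64, 64)
depth = torch.randn(2, 1, 64, 64)   # a colorized depth representation would use 3 channels instead

# Early fusion: stack RGB and depth channels, one encoder sees joint patches.
early = TinyEncoder(in_dim=(3 + 1) * 16 * 16)
feat_early = early(patchify(torch.cat([rgb, depth], dim=1)))

# Late fusion: separate encoders per modality, embeddings concatenated afterwards.
enc_rgb, enc_d = TinyEncoder(3 * 16 * 16), TinyEncoder(1 * 16 * 16)
feat_late = torch.cat([enc_rgb(patchify(rgb)), enc_d(patchify(depth))], dim=1)

print(feat_early.shape, feat_late.shape)    # torch.Size([2, 192]) torch.Size([2, 384])
```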

    Simultaneous multi-view object recognition and grasping in open-ended domains

    To aid humans in everyday tasks, robots need to know which objects exist in the scene, where they are, and how to grasp and manipulate them in different situations. Therefore, object recognition and grasping are two key functionalities for autonomous robots. Most state-of-the-art approaches treat object recognition and grasping as two separate problems, even though both use visual input. Furthermore, the knowledge of the robot is fixed after the training phase. In such cases, if the robot encounters new object categories, it must be retrained to incorporate new information without catastrophic forgetting. In order to resolve this problem, we propose a deep learning architecture with an augmented memory capacity to handle open-ended object recognition and grasping simultaneously. In particular, our approach takes multiple views of an object as input and jointly estimates a pixel-wise grasp configuration as well as a deep scale- and rotation-invariant representation as output. The obtained representation is then used for open-ended object recognition through a meta-active learning technique. We demonstrate the ability of our approach to grasp never-seen-before objects and to rapidly learn new object categories using very few examples on-site, in both simulation and real-world settings. A video of these experiments is available online at this https URL.
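    The approach couples a per-view grasp prediction with a pooled, view-invariant descriptor that feeds an open-ended recognition memory. The sketch below shows that coupling only in outline; the layers, the max-pooling over views, and the cosine-threshold memory rule are assumptions, not the paper's meta-active learning method.

```python
# Hedged sketch of the joint idea: a shared encoder over multiple views emits
# (a) a per-view grasp map and (b) a pooled object descriptor that is matched
# against a growing category memory for open-ended recognition.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiViewGraspRecognizer(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.grasp_head = nn.Conv2d(feat_dim, 1, 1)     # simplified per-pixel grasp logits
        self.desc_head = nn.Linear(feat_dim, feat_dim)  # object descriptor

    def forward(self, views):                           # views: (B, V, 3, H, W)
        B, V = views.shape[:2]
        f = self.backbone(views.flatten(0, 1))          # (B*V, C, h, w)
        grasp = self.grasp_head(f).view(B, V, *f.shape[-2:])
        pooled = f.mean(dim=(2, 3)).view(B, V, -1).max(dim=1).values  # view pooling
        return grasp, F.normalize(self.desc_head(pooled), dim=-1)

# Open-ended recognition: cosine matching against stored category prototypes.
memory = {}                                             # label -> prototype descriptor
def recognize_or_learn(desc, label=None, thresh=0.7):
    if memory:
        sims = {k: float(desc @ v) for k, v in memory.items()}
        best = max(sims, key=sims.get)
        if sims[best] >= thresh:
            return best
    if label is not None:                               # teach a new category on-site
        memory[label] = desc.detach()
        return label
    return "unknown"

model = MultiViewGraspRecognizer()
grasp_maps, desc = model(torch.randn(1, 3, 3, 64, 64))  # three views of one object
print(grasp_maps.shape, recognize_or_learn(desc[0], label="mug"))
```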